The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
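The patch-based training that 69% of respondents used to cope with oversized samples can be illustrated with a minimal sketch (assumed sizes and stride; real pipelines add padding, augmentation, and batching):

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a window over a 2D image and collect square patches.

    A minimal sketch of the patch-based training strategy the survey
    reports as the most common remedy for oversized data samples.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 "image" too large to process at once, cut into 128x128 patches.
image = np.random.rand(512, 512)
patches = extract_patches(image, patch_size=128, stride=128)
print(patches.shape)  # (16, 128, 128)
```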
Foundation models have recently shown excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models simply focus on image-level pretraining and adaptation, which is limiting for dynamic and complex video-level understanding tasks. To fill the gap, we present a general video foundation model, InternVideo, by taking advantage of both generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as the pretraining objectives, and selectively coordinates video representations of these two complementary frameworks in a learnable manner to boost various video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets from extensive tasks including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our methods obtain 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively show the generality of our InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo .
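The video-language contrastive objective named above can be sketched as a generic symmetric InfoNCE-style loss; this is an illustration of the general idea, not InternVideo's actual implementation, and the temperature value is an assumption:

```python
import numpy as np

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss between paired video/text embeddings.

    A generic sketch of video-language contrastive pretraining, one of
    the two objectives the abstract names.
    """
    # L2-normalize so the dot product is cosine similarity.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (B, B); matched pairs on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(np.diag(p)).mean()

    # Average the video-to-text and text-to-video directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Perfectly aligned pairs drive the loss toward zero.
loss = contrastive_loss(np.eye(4), np.eye(4))
print(loss)  # close to 0
```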
Full-body reconstruction is a fundamental but challenging task. Owing to the lack of annotated data, the performance of existing methods is largely limited. In this paper, we propose a novel method named Full-body Reconstruction from Part Experts (FuRPE) to tackle this issue. In FuRPE, the network is trained using pseudo labels and features generated from part experts. A simple yet effective pseudo ground-truth selection scheme is proposed to extract high-quality pseudo labels. In this way, a large number of existing human body reconstruction datasets can be leveraged to contribute to the model training. In addition, an exponential moving average training strategy is introduced to train the network in a self-supervised manner, further boosting the performance of the model. Extensive experiments on several widely used datasets demonstrate the effectiveness of our method over the baseline. Our method achieves state-of-the-art performance. Code will be made publicly available for further research.
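The exponential moving average (EMA) training strategy mentioned above can be sketched in a few lines; plain dicts stand in for network weights, and the decay value is an assumption for illustration:

```python
def ema_update(student, teacher, decay=0.999):
    """One EMA step: teacher <- decay * teacher + (1 - decay) * student.

    A minimal sketch of the self-supervised EMA strategy the abstract
    mentions; in practice this runs over all model parameters each step.
    """
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}

# Toy example with a single scalar "weight" and an exaggerated decay.
teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(3):
    teacher = ema_update(student, teacher, decay=0.5)
print(teacher["w"])  # 0.875 -- the teacher drifts smoothly toward the student
```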
We seek to impose linear equality constraints in feedforward neural networks. As top-layer predictors are usually nonlinear, this is a difficult task if we seek to deploy standard convex optimization methods and strong duality. To overcome this, we introduce a new saddle-point Lagrangian with auxiliary predictor variables on which the constraints are imposed. Elimination of the auxiliary variables leads to a dual minimization problem on the Lagrange multipliers introduced to satisfy the linear constraints. This minimization problem is combined with the standard learning problem on the weight matrices. From this theoretical line of development, we obtain the surprising interpretation of Lagrange parameters as additional, penultimate-layer hidden units with fixed weights stemming from the constraints. Consequently, standard minimization approaches can be used despite the inclusion of Lagrange parameters -- a very satisfying, albeit unexpected, discovery. Examples ranging from multi-label classification to constrained autoencoders are envisaged in the future.
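The core idea can be illustrated with the closed-form case: adding a correction term A^T @ lam (extra "hidden units" with fixed weights A^T) makes any prediction satisfy the linear constraint A y = b. This is a sketch of the interpretation, not the paper's training procedure:

```python
import numpy as np

def constrain(y_hat, A, b):
    """Project a prediction onto the affine set {y : A y = b}.

    Sketch of the idea that linear equality constraints can be satisfied
    by adding A.T @ lam -- i.e., extra penultimate-layer units with fixed
    weights A.T -- where lam is the closed-form minimizing multiplier.
    """
    lam = np.linalg.solve(A @ A.T, b - A @ y_hat)
    return y_hat + A.T @ lam

# Force three outputs to sum to 1 (A = [1, 1, 1], b = [1]).
A = np.ones((1, 3))
b = np.array([1.0])
y = constrain(np.array([0.2, 0.5, 0.6]), A, b)
print(y.sum())  # 1.0
```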
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to do high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
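The INT8 models the challenge targets rely on mapping floating-point weights to 8-bit integers plus a scale. A generic symmetric-quantization sketch (not the participants' actual tooling, which typically uses a framework converter) shows the roundtrip:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization.

    A generic sketch of the idea behind NPU-friendly INT8 models; real
    deployments also handle zero-points, per-channel scales, etc.
    """
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.array([-0.5, 0.0, 0.25, 0.5], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(np.abs(weights - restored).max())  # small quantization error
```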
Great success has been achieved with deep learning techniques for image super-resolution (SR) at fixed scales. To improve their real-world applicability, many models have also been proposed to restore SR images with arbitrary scale factors, including asymmetric ones where images are resized to different scales along the horizontal and vertical directions. Though most models are only optimized for the unidirectional upscaling task, while assuming a predefined downscaling kernel for low-resolution (LR) inputs, recent models based on Invertible Neural Networks (INN) are able to significantly improve upscaling accuracy by optimizing the downscaling and upscaling cycle jointly. However, limited by the INN architecture, this is constrained to fixed integer scale factors and requires one model per scale. Without increasing model complexity, a simple and effective Invertible Arbitrary Rescaling Network (IARN) is proposed in this work to achieve arbitrary image rescaling by training only one model. Using innovative components such as position-aware scale encoding and preemptive channel splitting, the network is optimized to convert the non-invertible rescaling cycle into an effectively invertible process. It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs. It is also demonstrated to perform well on tests with asymmetric scales using the same network architecture.
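The invertibility that INN-based rescaling models (including IARN) build on comes from coupling layers, which are invertible by construction even when their internal function is not. A minimal additive-coupling sketch, with an arbitrary assumed function f:

```python
import numpy as np

def coupling_forward(x1, x2, f):
    """Additive coupling: y1 = x1, y2 = x2 + f(x1).

    A minimal sketch of the invertible building block behind INN-based
    rescaling; f may be any function, e.g. a small neural network.
    """
    return x1, x2 + f(x1)

def coupling_inverse(y1, y2, f):
    """Exact inverse: recover x2 by subtracting the same f(y1)."""
    return y1, y2 - f(y1)

f = lambda t: np.tanh(t) * 2.0  # arbitrary, non-invertible on its own
x1, x2 = np.array([0.3, -1.2]), np.array([2.0, 0.5])
y1, y2 = coupling_forward(x1, x2, f)
r1, r2 = coupling_inverse(y1, y2, f)
print(np.allclose(r1, x1) and np.allclose(r2, x2))  # True
```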
Video-based person re-identification (ReID) aims to identify a given pedestrian from video sequences across multiple non-overlapping cameras. To aggregate the temporal and spatial features of video samples, graph neural networks (GNNs) have been introduced. However, existing graph-based models, such as STGCN, perform mean/max pooling on node features to obtain the graph representation, which neglects the graph topology and node importance. In this paper, we propose a graph pooling network (GPNet) to learn multi-granularity graph representations for video retrieval, where a graph pooling layer is implemented to downsample the graph. We first construct a multi-granularity graph whose node features are the image embeddings learned by a backbone, with edges established between temporal and Euclidean neighbor nodes. We then implement multiple graph convolutional layers to perform neighborhood aggregation on the graphs. To downsample the graphs, we propose a multi-head full-attention graph pooling (MHFAPool) layer, which integrates the advantages of existing node-clustering and node-selection pooling methods. Specifically, MHFAPool takes the principal eigenvector of the full attention matrix as the aggregation coefficients, so that each pooled node involves global graph information. Extensive experiments demonstrate that our GPNet achieves competitive results on four widely used datasets, i.e., MARS, DukeMTMC-VideoReID, iLIDS-VID and PRID-2011.
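The principal-eigenvector pooling idea can be sketched with a single head and no learned parameters; this is an illustration of the mechanism only, not the MHFAPool layer itself:

```python
import numpy as np

def eigen_pool(node_feats, attn):
    """Pool node features using the principal eigenvector of an attention matrix.

    Sketch of the MHFAPool idea: the dominant eigenvector of the full
    attention matrix supplies per-node aggregation weights, so the pooled
    representation reflects global graph structure.
    """
    sym = 0.5 * (attn + attn.T)      # symmetrize so eigenvectors are real
    vals, vecs = np.linalg.eigh(sym) # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])          # eigenvector of the largest eigenvalue
    w = w / w.sum()                  # normalize into pooling coefficients
    return w @ node_feats            # weighted sum over nodes

feats = np.random.rand(5, 8)         # 5 nodes, 8-dim features
attn = np.random.rand(5, 5)          # stand-in for a learned attention matrix
g = eigen_pool(feats, attn)
print(g.shape)  # (8,)
```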
Existing methods for video-based person re-identification (ReID) mainly learn the appearance features of a given pedestrian via a feature extractor and a feature aggregator. However, such appearance models fail when different pedestrians have similar appearances. Considering that different pedestrians have different walking postures and body proportions, we propose to learn discriminative pose features beyond appearance features for video retrieval. Specifically, we implement a two-branch architecture to separately learn appearance features and pose features, which are then concatenated for inference. To learn pose features, we first detect the pedestrian pose in each frame with an off-the-shelf pose detector and construct a temporal graph from the pose sequence. We then exploit a recurrent graph convolutional network (RGCN) to learn node embeddings of the temporal pose graph, which devises a global information propagation mechanism to simultaneously achieve intra-frame neighborhood aggregation and inter-frame message passing. Finally, we propose a dual-attention method consisting of node attention and time attention to obtain the temporal graph representation from the node embeddings, where a self-attention mechanism is employed to learn the importance of each node and each frame. We verify the proposed method on three video-based ReID datasets, i.e., MARS, DukeMTMC and iLIDS-VID, and the experimental results demonstrate that the learned pose features can effectively improve the performance of existing appearance models.
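The time-attention half of the dual-attention idea (scoring each frame's importance before aggregating) can be sketched generically; the mean-feature query and scaling are assumptions for illustration, not the paper's module:

```python
import numpy as np

def temporal_attention_pool(frame_feats):
    """Attention-weighted pooling over frames.

    A generic sketch of temporal attention: score each frame against the
    clip's mean feature, softmax-normalize, then take a weighted average.
    """
    query = frame_feats.mean(axis=0)
    scores = frame_feats @ query / np.sqrt(frame_feats.shape[1])
    scores = scores - scores.max()               # numerical stability
    w = np.exp(scores) / np.exp(scores).sum()    # per-frame importance
    return w @ frame_feats                       # convex combination of frames

feats = np.random.rand(10, 16)   # 10 frames, 16-dim embeddings
clip = temporal_attention_pool(feats)
print(clip.shape)  # (16,)
```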
Model-based gait recognition methods usually adopt pedestrian walking postures to identify humans. However, existing methods do not explicitly resolve the large intra-class variance of human poses caused by camera-view changes. In this paper, we propose to generate multi-view pose sequences for each single-view pose sample by learning full-rank transformation matrices via a lower-upper generative adversarial network (LUGAN). Based on the prior of camera imaging, we derive that the spatial coordinates between cross views satisfy a linear transformation of a full-rank matrix; thereby, this paper employs adversarial training to learn transformation matrices from the source pose and target views to obtain the target pose sequences. To this end, we implement a generator consisting of graph convolutional (GCN) layers, fully connected (FC) layers and two-branch convolutional (CNN) layers: the GCN layers and FC layers encode the source pose sequence and target view; then the two CNN branches learn a lower-triangular matrix and an upper-triangular matrix, respectively; finally, the two are multiplied to formulate the full-rank transformation matrix. For the purpose of adversarial training, we further design a condition discriminator that distinguishes whether a pose sequence is real or generated. To enable high-level correlation learning, we propose a plug-and-play module named multi-scale hypergraph convolution (HGC) to replace the spatial graph convolutional layer in the baseline, which can simultaneously model joint-level, part-level and body-level correlations. Extensive experiments on two large gait recognition datasets, i.e., CASIA-B and OUMVLP-Pose, demonstrate that our method outperforms the baseline model and existing pose-based methods by a large margin.
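The lower-upper construction above guarantees full rank because a triangular matrix with nonzero diagonal is invertible, and a product of invertible matrices is invertible. A numeric sketch with random triangular factors standing in for the two CNN branches:

```python
import numpy as np

def full_rank_from_lu(lower, upper):
    """Compose a transformation matrix as L @ U from triangular factors.

    Sketch of LUGAN's construction: both factors have strictly positive
    diagonals, so each is invertible and their product is full rank.
    Here the factors are random rather than generator outputs.
    """
    return lower @ upper

n = 4
L = np.tril(np.random.rand(n, n)) + np.eye(n)  # diagonal entries > 1
U = np.triu(np.random.rand(n, n)) + np.eye(n)
T = full_rank_from_lu(L, U)
print(np.linalg.matrix_rank(T))  # 4
```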
Transformers are transformative frameworks for modeling sequential data, achieving remarkable performance on a wide range of tasks, but at high computational and energy cost. To improve their efficiency, a popular choice is to compress the models via binarization, which constrains floating-point values to binary values and, owing to cheap bitwise operations, greatly reduces resource consumption. However, existing binarization methods only aim at minimizing the information loss of the input distribution statistically, while ignoring the pairwise similarity modeling at the core of the attention mechanism. To this end, we propose a new binarization paradigm customized to high-dimensional softmax attention via kernelized hashing, called EcoFormer, which maps the original queries and keys into low-dimensional binary codes in Hamming space. The kernelized hash functions are learned to match the ground-truth similarity relations extracted from the attention map in a self-supervised way. Based on the equivalence between the inner product of binary codes and the Hamming distance, as well as the associative property of matrix multiplication, we can approximate the attention in linear complexity by expressing it as a dot product of binary codes. Moreover, the compact binary representations of queries and keys enable us to replace most of the expensive multiply-accumulate operations in attention with simple accumulations, saving considerable on-chip energy footprint on edge devices. Extensive experiments on both vision and language tasks show that EcoFormer consistently achieves comparable performance to standard attention while consuming much fewer resources. For example, based on PVTv2-B0 and ImageNet-1K, EcoFormer achieves a 73% reduction in energy footprint with only a 0.33% performance drop compared with standard attention. The code is available at https://github.com/ziplab/ecoformer .
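The equivalence the abstract invokes is the identity dot(a, b) = d - 2 * Hamming(a, b) for d-bit codes in {-1, +1}^d: matching bits contribute +1 and mismatching bits -1. A quick numeric check with random codes standing in for learned hash outputs:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two {-1, +1} codes."""
    return int(np.sum(a != b))

# For codes in {-1,+1}^d: dot(a, b) = (d - H) - H = d - 2 * H.
# This identity is what lets similarity be computed with cheap bit
# operations instead of multiply-accumulates.
rng = np.random.default_rng(0)
d = 16
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)
print(int(a @ b) == d - 2 * hamming(a, b))  # True
```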